Search Results
Search for: All records
Total Resources: 3
- Author / Contributor
- Filter by Author / Creator
- Song, Yuanqing (3)
- Djurić, Petar M. (3)
- Dong, Penghao (1)
- Liu, Yuhao (1)
- Llorente, Fernando (1)
- Mallipattu, Sandeep K. (1)
- Ravishankar, Anand (1)
- Yao, Shanshan (1)
- Yu, Shangyouqiao (1)
- Zhang, Zimeng (1)
- Free, publicly-accessible full text available April 6, 2026
- Ravishankar, Anand; Llorente, Fernando; Song, Yuanqing; Djurić, Petar M. (Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing). Free, publicly-accessible full text available April 6, 2026.
- Dong, Penghao; Song, Yuanqing; Yu, Shangyouqiao; Zhang, Zimeng; Mallipattu, Sandeep K.; Djurić, Petar M.; Yao, Shanshan (Small).
Abstract: Lip-reading provides an effective speech communication interface for people with voice disorders and for intuitive human–machine interactions. Existing systems are generally challenged by bulkiness, obtrusiveness, and poor robustness against environmental interference. The lack of a truly natural and unobtrusive system for converting lip movements to speech precludes the continuous use and wide-scale deployment of such devices. Here, the design of a hardware–software architecture to capture, analyze, and interpret lip movements associated with either normal or silent speech is presented. The system can recognize different and similar visemes. It is robust in a noisy or dark environment. Self-adhesive, skin-conformable, and semi-transparent dry electrodes are developed to track high-fidelity speech-relevant electromyogram signals without impeding daily activities. The resulting skin-like sensors can form seamless contact with the curvilinear and dynamic surfaces of the skin, which is crucial for a high signal-to-noise ratio and minimal interference. Machine learning algorithms are employed to decode electromyogram signals and convert them to spoken words. Finally, the applications of the developed lip-reading system in augmented reality and medical service are demonstrated, which illustrate the great potential in immersive interaction and healthcare applications.
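The abstract above describes a pipeline that records surface electromyogram (EMG) signals from skin-conformable electrodes and uses machine learning to map them to visemes and words. The sketch below is an illustrative stand-in for that kind of pipeline, not the authors' implementation: the window length, channel count, hand-crafted features (RMS, mean absolute value, zero crossings), random-forest classifier, and synthetic data are all assumptions chosen for brevity.

```python
# Illustrative sketch only: a generic windowed-EMG viseme classifier,
# NOT the system described in the cited paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def emg_features(window: np.ndarray) -> np.ndarray:
    # Per-channel features: root-mean-square, mean absolute value, zero-crossing count.
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    zc = np.sum(np.abs(np.diff(np.signbit(window).astype(int), axis=0)), axis=0)
    return np.concatenate([rms, mav, zc.astype(float)])

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 256, 4))   # 200 windows, 256 samples, 4 channels (synthetic)
y = rng.integers(0, 5, size=200)         # 5 hypothetical viseme classes

X = np.stack([emg_features(w) for w in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With random synthetic windows the reported accuracy hovers near chance; the point is only to show how windowed EMG data could flow through feature extraction into a classifier.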